
Yolov6 and Yolov7 Re-parameterization: RepConv Explained (Code Ready to Use)


1. Introduction to RepConv

RepConv is a model re-parameterization technique: it merges several computational modules into one at inference time, improving the model's efficiency and performance. It was originally developed for VGG-style networks (RepVGG), and has since been applied to other architectures such as ResNet and DenseNet. The core idea is to use multi-branch convolution layers during training and then, at inference time, re-parameterize the branch weights into the main branch, reducing computation and memory consumption. RepConv has achieved strong results on tasks such as object detection.

2. Fusing a 3x3 Convolution with BN

The computation of a convolution layer is:

$$Conv(x) = W(x) + b$$

The parameters of a convolution layer are:

```python
import torch.nn as nn

for parameter in nn.Conv2d(3, 3, 3).named_parameters():
    print(parameter[0])
```

```
weight
bias
```

The computation of a BN layer is:

$$BN(x) = \gamma \cdot \frac{x - mean}{\sqrt{var}} + \beta$$

```python
for parameter in nn.BatchNorm2d(3).named_parameters():
    print(parameter[0])
```

```
weight
bias
```

Here, `weight` and `bias` are the $\gamma$ and $\beta$ in the BN formula; they are the parameters learned during training.
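As a quick sanity check of the formula (a minimal sketch; the shapes and variable names are chosen here for illustration), we can reproduce an eval-mode `nn.BatchNorm2d` by hand from its running statistics and learned parameters. Note that PyTorch adds a small `eps` inside the square root, which the formula above omits:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3).eval()               # eval mode: use running_mean / running_var
x = torch.rand(1, 3, 4, 4)
gamma = bn.weight.reshape(1, -1, 1, 1)      # the gamma in the formula
beta = bn.bias.reshape(1, -1, 1, 1)         # the beta in the formula
mean = bn.running_mean.reshape(1, -1, 1, 1)
var = bn.running_var.reshape(1, -1, 1, 1)
manual = gamma * (x - mean) / (var + bn.eps).sqrt() + beta
print("difference:", ((manual - bn(x)) ** 2).sum().item())
```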

Now substitute the convolution output into the BN formula and simplify:

$$
\begin{aligned}
BN(Conv(x)) &= \gamma \cdot \frac{Conv(x) - mean}{\sqrt{var}} + \beta \\
&= \gamma \cdot \frac{W(x) + b - mean}{\sqrt{var}} + \beta \\
&= \frac{\gamma \cdot W(x)}{\sqrt{var}} + \left( \frac{\gamma \cdot (b - mean)}{\sqrt{var}} + \beta \right)
\end{aligned}
$$

After simplification, the coefficient $\frac{\gamma}{\sqrt{var}}$ scales the original kernel, so $\frac{\gamma \cdot W}{\sqrt{var}}$ is the weight of the fused convolution and $\frac{\gamma \cdot (b - mean)}{\sqrt{var}} + \beta$ is its bias.

```python
import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1, c2):
        super().__init__()
        self.conv1 = nn.Conv2d(c1, c2, 3, 1, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(c2)
        self.conv_fuse = nn.Conv2d(c1, c2, 3, 1, 1)

    def fuse_conv_bn(self, conv, bn):
        # Fold the BN statistics and affine parameters into the conv weights
        bn_mean, bn_var, bn_gamma, bn_beta = (
            bn.running_mean,
            bn.running_var,
            bn.weight,
            bn.bias,
        )
        bn_std = (bn_var + bn.eps).sqrt()
        conv_weight = nn.Parameter((bn_gamma / bn_std).reshape(-1, 1, 1, 1) * conv.weight)
        conv_bias = nn.Parameter(bn_beta - bn_mean * bn_gamma / bn_std)
        return conv_weight, conv_bias

    def forward(self, x):
        return self.bn1(self.conv1(x))

    def forward_fuse(self, x):
        self.conv_fuse.weight.data, self.conv_fuse.bias.data = self.fuse_conv_bn(self.conv1, self.bn1)
        return self.conv_fuse(x)

inputs = torch.rand((1, 1, 3, 3))
# Important: put the model in eval mode so BN uses its running statistics
model = repconv3x3(1, 2).eval()
out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)
print("difference:", ((out2 - out1) ** 2).sum().item())
```

```
difference: 2.930988785010413e-14
```

3. 3x3 Convolution + 1x1 Convolution (in Parallel)

This one is very simple: zero-pad the 1x1 convolution's weight to the shape of a 3x3 kernel and add it to the 3x3 convolution's weight (the biases are added directly). This works because a 1x1 convolution is equivalent to a 3x3 convolution whose kernel is zero everywhere except the center.
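Before the full module, a quick check of the padding trick (a minimal sketch; the shapes are arbitrary): padding a 1x1 kernel with a ring of zeros puts its value at the center of a 3x3 kernel, which computes exactly the same output as the original 1x1 convolution:

```python
import torch
import torch.nn as nn

w1x1 = torch.rand(2, 1, 1, 1)                  # a 1x1 kernel: 1 in-channel, 2 out-channels
w3x3 = nn.functional.pad(w1x1, [1, 1, 1, 1])   # zero ring -> value sits at the center
x = torch.rand(1, 1, 5, 5)
out1 = nn.functional.conv2d(x, w1x1)               # plain 1x1 convolution
out2 = nn.functional.conv2d(x, w3x3, padding=1)    # padded 3x3 kernel needs padding=1
print("difference:", ((out2 - out1) ** 2).sum().item())
```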

```python
import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1, c2):
        super().__init__()
        self.conv1 = nn.Conv2d(c1, c2, 3, 1, 1)
        self.conv2 = nn.Conv2d(c1, c2, 1, 1, 0)
        self.conv_fuse = nn.Conv2d(c1, c2, 3, 1, 1)

    def fuse_1x1conv_3x3conv(self, conv1, conv2):
        # Pad the 1x1 kernel with a ring of zeros so it becomes a 3x3 kernel
        conv1x1_weight = nn.functional.pad(conv2.weight, [1, 1, 1, 1])
        conv_weight = conv1x1_weight + conv1.weight
        conv_bias = conv2.bias + conv1.bias
        return conv_weight, conv_bias

    def forward(self, x):
        return self.conv1(x) + self.conv2(x)

    def forward_fuse(self, x):
        self.conv_fuse.weight.data, self.conv_fuse.bias.data = self.fuse_1x1conv_3x3conv(self.conv1, self.conv2)
        return self.conv_fuse(x)

inputs = torch.rand((1, 1, 3, 3))
# Put the model in eval mode
model = repconv3x3(1, 2).eval()
out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)
print("difference:", ((out2 - out1) ** 2).sum().item())
```

```
difference: 2.4980018054066022e-15
```

4. 1x1 Convolution and 3x3 Convolution in Series

$$output = Conv_{3 \times 3}(Conv_{1 \times 1}(x))$$

First swap dimensions 0 and 1 of the 1x1 kernel (its output- and input-channel axes), then convolve the 3x3 kernel's weights with the transposed 1x1 kernel to obtain the fused weight. (The code below uses bias-free convolutions, which keeps the fusion to a single weight product.)
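Why this works (a short derivation; the superscripts on $W$ are notation introduced here for clarity): a 1x1 convolution is a per-pixel linear map over channels, so its output channel $m$ is $\sum_i W^{1\times1}_{m,i} x_i$. Feeding that into the 3x3 convolution and exchanging the sums gives

$$Conv_{3\times3}(Conv_{1\times1}(x))_o = \sum_m W^{3\times3}_{o,m} * \sum_i W^{1\times1}_{m,i}\, x_i = \sum_i \Big( \sum_m W^{1\times1}_{m,i}\, W^{3\times3}_{o,m} \Big) * x_i$$

The bracketed term is the fused 3x3 kernel for output channel $o$ and input channel $i$. The call `nn.functional.conv2d(conv3x3.weight, conv1x1.weight.permute(1, 0, 2, 3))` computes exactly this contraction: the `permute` swaps the 1x1 kernel's output- and input-channel axes so its filters contract the 3x3 kernel's input-channel dimension.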

```python
import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1, c2):
        super().__init__()
        self.conv3x3 = nn.Conv2d(c2, c1, 3, 1, 1, bias=False)
        self.conv1x1 = nn.Conv2d(c1, c2, 1, 1, 0, bias=False)
        self.conv_fuse = nn.Conv2d(c1, c1, 3, 1, 1, bias=False)

    def fuse_1x1conv_3x3conv(self, conv3x3, conv1x1):
        # Contract the 3x3 kernel's input channels with the 1x1 kernel:
        # treating the 3x3 weight as a batch of "images", convolve it with
        # the channel-transposed 1x1 weight
        weight = nn.functional.conv2d(conv3x3.weight.data, conv1x1.weight.data.permute(1, 0, 2, 3))
        return weight

    def forward(self, x):
        return self.conv3x3(self.conv1x1(x))

    def forward_fuse(self, x):
        self.conv_fuse.weight.data = self.fuse_1x1conv_3x3conv(self.conv3x3, self.conv1x1)
        return self.conv_fuse(x)

inputs = torch.rand((1, 1, 3, 3))
# Put the model in eval mode
model = repconv3x3(1, 2).eval()
out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)
print("difference:", ((out2 - out1) ** 2).sum().item())
```

```
difference: 4.90059381963448e-16
```

5. 1x1 Conv, 1x3 Conv, and 3x1 Conv in Parallel

Pad every branch's weight to the 3x3 kernel shape, then add them together (and sum the biases).

```python
import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1, c2):
        super().__init__()
        # Note: the parallel sum in forward() requires c1 == c2 here
        self.conv1x1 = nn.Conv2d(c1, c2, 1, 1, 0)
        self.conv1x3 = nn.Conv2d(c2, c2, (1, 3), 1, (0, 1))
        self.conv3x1 = nn.Conv2d(c2, c2, (3, 1), 1, (1, 0))
        self.conv_fuse = nn.Conv2d(c1, c2, 3, 1, 1)

    def fuse_1x1conv_1x3conv_3x1conv(self, conv1x1, conv1x3, conv3x1):
        # Pad every kernel to 3x3, then sum the weights and biases
        weight = (
            nn.functional.pad(conv1x1.weight.data, (1, 1, 1, 1))    # 1x1 -> 3x3
            + nn.functional.pad(conv1x3.weight.data, (0, 0, 1, 1))  # 1x3 -> 3x3
            + nn.functional.pad(conv3x1.weight.data, (1, 1, 0, 0))  # 3x1 -> 3x3
        )
        bias = conv1x1.bias.data + conv1x3.bias.data + conv3x1.bias.data
        return weight, bias

    def forward(self, x):
        return self.conv3x1(x) + self.conv1x3(x) + self.conv1x1(x)

    def forward_fuse(self, x):
        self.conv_fuse.weight.data, self.conv_fuse.bias.data = self.fuse_1x1conv_1x3conv_3x1conv(
            self.conv1x1, self.conv1x3, self.conv3x1
        )
        return self.conv_fuse(x)

inputs = torch.rand((1, 2, 3, 3))
# Put the model in eval mode
model = repconv3x3(2, 2).eval()
out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)
print("difference:", ((out2 - out1) ** 2).sum().item())
```

```
difference: 7.327471962526033e-15
```

6. Converting AvgPooling into a Conv

Pooling operates on each input channel independently (it pools each feature map on its own), while a convolution sums its results over all input channels. An average pooling layer can therefore be replaced by a convolution with fixed weights: for a pooling kernel containing K elements (K = 9 for a 3x3 kernel), set the convolution weight to 1/K. Because the convolution sums over input channels, each output channel must have weight 1/K only on its matching input channel, with the weights for every other channel set to 0.
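Concretely (with $o$ the output channel, $i$ the input channel, and $u, v$ the spatial kernel indices; notation introduced here for clarity), the fixed weight of the equivalent convolution is

$$W_{o,i,u,v} = \begin{cases} \dfrac{1}{K} & \text{if } o = i \\ 0 & \text{otherwise} \end{cases}$$

so each output channel averages a window of its own input channel and ignores all the others.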

```python
import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1):
        super().__init__()
        self.avg = nn.AvgPool2d(3, 1, 1)
        self.conv_fuse = nn.Conv2d(c1, c1, 3, 1, 1, bias=False)

    def fuse_avg(self):
        # Zero all weights, then set each channel's own 3x3 kernel to 1/9
        self.conv_fuse.weight.data[:] = 0
        for i in range(self.conv_fuse.in_channels):
            self.conv_fuse.weight.data[i, i, :, :] = 1 / (
                torch.prod(torch.tensor(self.conv_fuse.kernel_size))
            )

    def forward(self, x):
        return self.avg(x)

    def forward_fuse(self, x):
        self.fuse_avg()
        return self.conv_fuse(x)

inputs = torch.rand((1, 2, 3, 3))
# Put the model in eval mode
model = repconv3x3(2).eval()
out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)
print("difference:", ((out2 - out1) ** 2).sum().item())
```

```
difference: 1.0658141036401503e-14
```
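Putting the pieces together, here is a minimal sketch of a two-branch RepConv-style block in the spirit of Yolov6/Yolov7: a 3x3+BN branch in parallel with a 1x1+BN branch, fused into a single 3x3 convolution at deploy time. The class name `RepConvSketch` and the exact layer layout are illustrative assumptions, not the implementation from either repository:

```python
import torch
import torch.nn as nn

class RepConvSketch(nn.Module):
    """Hypothetical two-branch re-parameterizable block (illustrative only)."""

    def __init__(self, c1, c2):
        super().__init__()
        # Training-time branches: 3x3+BN in parallel with 1x1+BN
        self.conv3x3 = nn.Conv2d(c1, c2, 3, 1, 1, bias=False)
        self.bn3x3 = nn.BatchNorm2d(c2)
        self.conv1x1 = nn.Conv2d(c1, c2, 1, 1, 0, bias=False)
        self.bn1x1 = nn.BatchNorm2d(c2)
        # Deploy-time fused convolution
        self.conv_fuse = nn.Conv2d(c1, c2, 3, 1, 1)

    def _fuse_conv_bn(self, conv, bn):
        # Section 2: fold BN statistics into the convolution
        std = (bn.running_var + bn.eps).sqrt()
        weight = (bn.weight / std).reshape(-1, 1, 1, 1) * conv.weight
        bias = bn.bias - bn.running_mean * bn.weight / std
        return weight, bias

    def fuse(self):
        w3, b3 = self._fuse_conv_bn(self.conv3x3, self.bn3x3)
        w1, b1 = self._fuse_conv_bn(self.conv1x1, self.bn1x1)
        # Section 3: pad the 1x1 weight to 3x3 and sum the branches
        self.conv_fuse.weight.data = w3 + nn.functional.pad(w1, [1, 1, 1, 1])
        self.conv_fuse.bias.data = b3 + b1

    def forward(self, x):
        return self.bn3x3(self.conv3x3(x)) + self.bn1x1(self.conv1x1(x))

    def forward_fuse(self, x):
        return self.conv_fuse(x)

inputs = torch.rand((1, 2, 8, 8))
model = RepConvSketch(2, 4).eval()  # eval mode: BN uses running statistics
out1 = model(inputs)
model.fuse()
out2 = model.forward_fuse(inputs)
print("difference:", ((out2 - out1) ** 2).sum().item())
```

The fusion here is simply Section 2 (fold each BN into its convolution) followed by Section 3 (pad the 1x1 branch to 3x3 and sum); full RepVGG-style blocks additionally fold an identity/BN branch into the same 3x3 kernel in the same way.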

